A Parametric UMAP's sampling and effective loss function

In Parametric UMAP […], the loss is computed for this mini-batch and the parameters of the neural network are then updated via stochastic gradient descent. This optimization differs in two ways from UMAP: first, since automatic differentiation is used, not only is the head of a negative-sample edge repelled from the tail, but both repel each other; second, the same number of edges is sampled in each epoch. This leads to a different repulsive weight for Parametric UMAP, as described in Theorem A.1. Parametric UMAP's negative sampling is uniform from a batch that is itself sampled […]. Since UMAP's implementation considers a point its first nearest neighbor, but the […]

C Computing the expected gradient of UMAP's optimization procedure

In this appendix, we show that the expected update in UMAP's optimization scheme does not […] It is continuously differentiable unless two embedding points coincide.
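The mini-batch scheme described above can be sketched as follows. This is a minimal illustration, not the reference implementation: the function name `minibatch_loss`, the uniform in-batch negative sampling, and the default curve parameters `a = b = 1` are simplifying assumptions. Because the loss is a single differentiable expression in the embedding coordinates, autodiff would propagate repulsive gradients to both endpoints of a negative-sample pair, which is the first difference from UMAP noted above.

```python
import numpy as np

def low_dim_similarity(d2, a=1.0, b=1.0):
    """UMAP-style low-dimensional similarity q = 1 / (1 + a * d^(2b)),
    as a function of the squared distance d2. a = b = 1 is a simplification."""
    return 1.0 / (1.0 + a * d2 ** b)

def minibatch_loss(emb, edges, n_neg=5, rng=None, eps=1e-8):
    """Cross-entropy-style loss on a mini-batch of positive edges,
    with negatives drawn uniformly from the batch itself (a sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    heads, tails = edges[:, 0], edges[:, 1]
    # Attraction: -log q_ij along the sampled positive edges.
    d2_pos = np.sum((emb[heads] - emb[tails]) ** 2, axis=1)
    attract = -np.log(low_dim_similarity(d2_pos) + eps)
    # Repulsion: -log(1 - q_ik) against negatives sampled uniformly
    # from the points that appear in this batch.
    batch_points = np.unique(edges)
    neg = rng.choice(batch_points, size=(len(heads), n_neg))
    d2_neg = np.sum((emb[heads, None, :] - emb[neg]) ** 2, axis=2)
    repel = -np.log(1.0 - low_dim_similarity(d2_neg) + eps).sum(axis=1)
    return (attract + repel).mean()
```

Differentiating this scalar with respect to `emb` (e.g. with an autodiff framework) moves both `emb[heads]` and `emb[neg]` in the repulsive terms, in contrast to UMAP's hand-coded update, which moves only the head.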
On UMAP's true loss function
Sebastian Damrich, Fred A. Hamprecht
UMAP has supplanted t-SNE as the state of the art for visualizing high-dimensional datasets in many disciplines, but the reason for its success is not well understood. In this work, we investigate UMAP's sampling-based optimization scheme in detail. We derive UMAP's effective loss function in closed form and find that it differs from the published one. As a consequence, we show that UMAP does not aim to reproduce its theoretically motivated high-dimensional UMAP similarities. Instead, it tries to reproduce similarities that only encode the shared $k$-nearest-neighbor graph, thereby challenging the previous understanding of UMAP's effectiveness. We claim that the key to UMAP's success is instead its implicit balancing of attraction and repulsion resulting from negative sampling, which in turn facilitates optimization via gradient descent. We corroborate our theoretical findings on toy and single-cell RNA sequencing data.
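The sampling-based optimization the abstract refers to can be sketched in a few lines. This is a simplified illustration under stated assumptions, not UMAP's actual code: the function name `umap_sgd_step` is hypothetical, the curve parameters default to `a = b = 1`, and the small constant added to the squared distance (which UMAP also uses to keep the repulsive gradient bounded) is chosen here for illustration. Each step attracts the endpoints of one sampled $k$NN edge and repels the head from a few uniformly sampled "negative" points; it is this fixed ratio of attractive to repulsive moves that produces the implicit balance of attraction and repulsion.

```python
import numpy as np

def umap_sgd_step(emb, head, tail, n_neg, lr=1.0, a=1.0, b=1.0, rng=None):
    """One UMAP-style stochastic update (a sketch):
    attract `head` and `tail` along a sampled kNN edge, then repel `head`
    from `n_neg` uniformly sampled points (negative sampling).
    Only the head is moved during repulsion, as in UMAP's implementation."""
    rng = np.random.default_rng() if rng is None else rng
    # Attractive gradient of -log q with q = 1 / (1 + a * d^(2b)).
    diff = emb[head] - emb[tail]
    d2 = diff @ diff
    grad_att = 2.0 * a * b * d2 ** (b - 1) / (1.0 + a * d2 ** b) * diff
    emb[head] -= lr * grad_att
    emb[tail] += lr * grad_att
    # Repulsive gradient of -log(1 - q) for uniformly drawn negatives;
    # the small constant in the denominator bounds the gradient near d = 0.
    for k in rng.integers(len(emb), size=n_neg):
        diff = emb[head] - emb[k]
        d2 = diff @ diff
        grad_rep = 2.0 * b / ((1e-3 + d2) * (1.0 + a * d2 ** b)) * diff
        emb[head] += lr * grad_rep
    return emb
```

Because attraction is applied once per sampled edge while repulsion is applied `n_neg` times against uniform negatives, the effective weights of the two forces are set by the sampling frequencies rather than by the published loss, which is the imbalance the paper analyzes.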